15 research outputs found

    Time series prediction and forecasting using Deep learning Architectures

    Nature produces time series data every day and everywhere, for example, weather data, physiological and biomedical signals, and financial and business recordings. Predicting future observations from a collected sequence of historical observations is called time series forecasting. Forecasts are essential, since they guide decisions in many areas of scientific, industrial and economic activity such as meteorology, telecommunication, finance, sales and stock exchange rates. A massive amount of research has been carried out over many years to develop models that improve time series forecasting accuracy. The major aim of time series modelling is to carefully examine the past observations of a time series and to develop an appropriate model that elucidates the inherent behaviour and patterns of the series. The behaviour and patterns of different time series may follow different conventions and in fact require specific countermeasures during modelling. Consequently, training neural networks to predict a set of time series from an unknown domain remains particularly challenging. Time series forecasting remains an arduous problem despite substantial improvements in machine learning approaches. This is usually due to factors such as different time series exhibiting different fluctuating behaviour; in real-world time series data, the discriminative patterns residing in the series are often distorted by random noise and affected by high-frequency perturbations. The major aim of this thesis is to contribute to the study and development of time series prediction and multi-step ahead forecasting methods based on deep learning algorithms. Time series forecasting using deep learning models is still in its infancy compared to other research areas for time series forecasting. A variety of time series data has been considered in this research. We explored several deep learning architectures on sequential data, such as Deep Belief Networks (DBNs), Stacked AutoEncoders (SAEs), Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). Moreover, we also proposed two new methods for multi-step ahead forecasting of time series data. A comparison with state-of-the-art methods is also presented. The research work conducted in this thesis makes theoretical, methodological and empirical contributions to time series prediction and multi-step ahead forecasting using deep learning architectures.
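
    A minimal sketch (not the thesis code) of the supervised setup behind direct multi-step ahead forecasting: a univariate series is turned into (input window, H-step-ahead target) pairs that any of the architectures above could be trained on. The window length and horizon below are illustrative assumptions.

```python
import numpy as np

def make_windows(series, lags=12, horizon=4):
    """Return X of shape (n, lags) and Y of shape (n, horizon)."""
    X, Y = [], []
    for t in range(len(series) - lags - horizon + 1):
        X.append(series[t:t + lags])                   # past observations
        Y.append(series[t + lags:t + lags + horizon])  # future targets
    return np.asarray(X), np.asarray(Y)

if __name__ == "__main__":
    # Synthetic noisy sinusoid standing in for a real series.
    s = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.random.randn(200)
    X, Y = make_windows(s, lags=12, horizon=4)
    print(X.shape, Y.shape)   # (185, 12) (185, 4)
```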

    A Hybrid Approach for Time Series Forecasting Using Deep Learning and Nonlinear Autoregressive Neural Networks

    During recent decades, several studies in the field of weather forecasting have produced various promising forecasting models. Nevertheless, the accuracy of the predictions remains a challenge. In this paper a new forecasting approach is proposed: it implements a deep neural network based on powerful feature extraction. The model is capable of deducing irregular structure, non-linear trends and significant representations as features learnt from the data. It is a 6-layered deep architecture with 4 hidden Restricted Boltzmann Machine (RBM) layers. The extracts from the last hidden layer are pre-processed to support the accuracy achieved by the forecaster. The forecaster is a 2-layer ANN model with 35 hidden units for predicting the future intervals. It captures the correlations and regression patterns relating the current sample to previous terms by using the learnt deep hierarchical representations of the data as input to the forecaster.
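
    A hedged sketch of the pipeline shape described above (stacked RBM feature layers feeding a small ANN forecaster), using scikit-learn stand-ins. The layer sizes, lag order and hyperparameters are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM, MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 30, 500)) + 0.1 * rng.standard_normal(500)

# Lagged inputs -> one-step-ahead target.
lags = 8
X = np.stack([series[t:t + lags] for t in range(len(series) - lags)])
y = series[lags:]

X = MinMaxScaler().fit_transform(X)   # BernoulliRBM expects values in [0, 1]

# Greedy layer-wise feature extraction through four stacked RBM layers.
feats = X
for n in (64, 32, 16, 8):
    rbm = BernoulliRBM(n_components=n, learning_rate=0.05,
                       n_iter=20, random_state=0)
    feats = rbm.fit_transform(feats)

# Small ANN forecaster (35 hidden units, as in the abstract).
forecaster = MLPRegressor(hidden_layer_sizes=(35,), max_iter=2000,
                          random_state=0).fit(feats, y)
print("train MSE:", np.mean((forecaster.predict(feats) - y) ** 2))
```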

    Time Series Forecasting for Outdoor Temperature using Nonlinear Autoregressive Neural Network Models

    Weather forecasting is a challenging time series forecasting problem because of its dynamic, continuous, data-intensive, chaotic and irregular behavior. At present, numerous time series forecasting techniques exist and are widely adopted. However, competitive research is still ongoing to improve the methods and techniques for accurate forecasting. This research article presents time series forecasting of a meteorological parameter, i.e., temperature, with a NARX (Nonlinear AutoRegressive with eXogenous input) based ANN (Artificial Neural Network). In this work, several time-series-dependent recurrent NARX-ANN models are developed and trained with dynamic parameter settings to find the optimum network model for the desired forecasting task. Network performance is analyzed on the basis of its Mean Square Error (MSE) over the training, validation and test data sets. To perform forecasting over the next 4-, 8- and 12-step horizons, the model with the lowest MSE is chosen as the most accurate temperature forecaster. Unlike one-step-ahead prediction, multi-step ahead forecasting is a more difficult and challenging problem due to its additional underlying complexity. The empirical findings in this work therefore provide valuable suggestions for the parameter settings of the NARX model, specifically the selection of hidden layer size and autoregressive lag terms, for appropriate multi-step ahead time series forecasting.
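
    A minimal sketch of the NARX regression setup: the next value y(t) is regressed on autoregressive lags of y and lags of an exogenous input x. The synthetic data, lag orders and MLP size are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 600
x = rng.standard_normal(n)   # exogenous input (e.g. humidity)
y = np.zeros(n)              # target (e.g. temperature)
for t in range(2, n):
    y[t] = 0.6 * y[t-1] - 0.2 * y[t-2] + 0.5 * x[t-1] \
           + 0.05 * rng.standard_normal()

d_y, d_x = 2, 2              # autoregressive / exogenous lag orders
rows = []
for t in range(max(d_y, d_x), n):
    rows.append(np.r_[y[t-d_y:t], x[t-d_x:t], y[t]])
data = np.asarray(rows)
X, target = data[:, :-1], data[:, -1]

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000,
                     random_state=0).fit(X[:500], target[:500])
print("test MSE:", np.mean((model.predict(X[500:]) - target[500:]) ** 2))
```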

    EEG Based Eye State Classification using Deep Belief Network and Stacked AutoEncoder

    A Brain-Computer Interface (BCI) provides an alternative communication interface between the human brain and a computer. Electroencephalogram (EEG) signals are acquired and processed, and machine learning algorithms are then applied to extract useful information. During EEG acquisition, artifacts are induced by involuntary eye movements or eye blinks, casting adverse effects on system performance. The aim of this research is to predict eye states from EEG signals using deep learning architectures and to present improved classifier models. Recent studies reflect that deep neural networks are state-of-the-art machine learning approaches. Therefore, the current work presents the implementation of a Deep Belief Network (DBN) and Stacked AutoEncoders (SAE) as classifiers with encouraging performance. One of the designed SAE models outperforms the DBN and the models presented in existing research, achieving an error rate of 1.1% on the test set, i.e., an accuracy of 98.9%. The findings of this study may contribute towards state-of-the-art performance on the problem of EEG-based eye state classification.
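
    A sketch of SAE-style training for eye-state classification, using Keras as a stand-in: unsupervised autoencoder pretraining followed by supervised fine-tuning of the encoder with a classification head. The layer sizes, synthetic data and labels are illustrative assumptions, not the paper's 14-channel EEG dataset or architecture.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 14)).astype("float32")  # 14 channels (synthetic)
y = (X[:, 0] + X[:, 3] > 0).astype("int32")            # synthetic eye-state labels

# 1) Unsupervised pretraining: a small dense autoencoder.
inp = keras.Input(shape=(14,))
code = layers.Dense(8, activation="relu")(inp)
out = layers.Dense(14)(code)
autoenc = keras.Model(inp, out)
autoenc.compile(optimizer="adam", loss="mse")
autoenc.fit(X, X, epochs=20, batch_size=32, verbose=0)

# 2) Supervised fine-tuning: reuse the encoder, add a sigmoid head.
clf_out = layers.Dense(1, activation="sigmoid")(code)
clf = keras.Model(inp, clf_out)
clf.compile(optimizer="adam", loss="binary_crossentropy",
            metrics=["accuracy"])
clf.fit(X, y, epochs=20, batch_size=32, verbose=0)
print("train accuracy:", clf.evaluate(X, y, verbose=0)[1])
```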

    Management of Scratchpad Memory Using Programming Techniques

    Using conventional approaches, processors are unable to achieve effective energy reduction, and in upcoming processors the on-chip memory system will be a major constraint. On-chip memories in Software Managed Chips (SMCs) are managed by software: they can work alongside on-chip caches, where software can explicitly read and write specific or complete memory references within a cache block, or operate separately as scratchpad memory. In embedded systems, scratchpad memory is generally used as an addition to caches or as a substitute for them, but because of their ease of programmability, cache-based architectures are still chosen in numerous applications. In contrast, Scratch-Pad Memories (SPMs) are being progressively adopted in embedded systems because of their better energy and silicon-area efficiency compared to conventional caches. With the language-agnostic software management method suggested in this manuscript, the power consumption of ported applications can be significantly lowered and the portability of scratchpad architectures improved. To enhance memory configuration and optimization on SPM-based architectures, a variety of current methods are reviewed to identify opportunities for optimization, and the use of new methods and their applicability to various memory management schemes are also discussed in this paper.
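
    A toy sketch, not the paper's technique: static SPM allocation is often cast in the literature as a knapsack problem, with the most frequently accessed objects (per byte) placed in scratchpad and the rest left to cached main memory. All names and numbers below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DataObject:
    name: str
    size: int       # bytes
    accesses: int   # profiled access count

def allocate_spm(objects, spm_bytes):
    spm, cached, free = [], [], spm_bytes
    # Greedy by access density: accesses per byte, highest first.
    for obj in sorted(objects, key=lambda o: o.accesses / o.size,
                      reverse=True):
        if obj.size <= free:
            spm.append(obj.name)
            free -= obj.size
        else:
            cached.append(obj.name)
    return spm, cached

objs = [DataObject("lut", 4096, 90000),
        DataObject("frame_buf", 16384, 20000),
        DataObject("coeffs", 512, 50000)]
print(allocate_spm(objs, spm_bytes=8192))
# -> (['coeffs', 'lut'], ['frame_buf'])
```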

    Performance evaluation of interactive video streaming over WiMAX network

    Nowadays, the desire for Internet access and the need for digital encodings have led quite a large number of users to access high-quality video applications. Offering multimedia services not only to wired but also to wireless mobile clients is becoming more viable. In a wireless medium, video streaming still has high resource requirements, for example, bandwidth, traffic priority and smooth playback; the bandwidth demands of these applications far exceed the capacity of 3G and Wireless Local Area Networks (LANs). The current research presents an introductory treatment of the Worldwide Interoperability for Microwave Access (WiMAX) network, its applications, mechanisms, potential features, and the techniques used to provide QoS in WiMAX; lastly, the network is simulated to report the diverse requirements of streamed video-conferencing traffic and its specifications. For this purpose two input parameters of video traffic are selected: refresh rate, measured in frames per second, and pixel resolution, which counts the number of pixels in the digital image. The network model is developed in OPNET. Different outcomes from the simulation-based models are analyzed and the underlying reasons discussed. In addition, a second aim of the current research is to determine whether WiMAX access technology could provide network performance for streaming video applications comparable to Asymmetric Digital Subscriber Line (ADSL). For this purpose, network metrics such as end-to-end delay and throughput are taken into consideration for optimization.
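
    A back-of-the-envelope sketch of the two traffic parameters the study varies, frame rate and pixel resolution, showing how they drive a stream's bitrate demand. The 0.1 bits-per-pixel compressed figure is an illustrative assumption, not a value from the paper.

```python
def video_bitrate(width, height, fps, raw_bpp=24, compressed_bpp=0.1):
    """Return (raw, compressed) bitrate in bits per second."""
    pixels = width * height
    return pixels * raw_bpp * fps, pixels * compressed_bpp * fps

for w, h, fps in [(352, 288, 15), (352, 288, 30), (704, 576, 30)]:
    raw, comp = video_bitrate(w, h, fps)
    print(f"{w}x{h}@{fps}fps: raw {raw/1e6:.1f} Mb/s, "
          f"~{comp/1e6:.2f} Mb/s compressed")
```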

    A Predictive Model of Artificial Neural Network for Fuel Consumption in Engine Control System

    This paper presents analyses and test results of an engine management system's operational architecture with an artificial neural network (ANN). The research involved several steps of investigation: theory, a stand test of the engine, and training of the ANN with test data generated from the proposed engine control system to predict future values of fuel consumption before calculating the engine speed. In this paper, we study a small 1.5-liter gasoline engine without direct fuel injection (injection in the intake manifold). The purpose of this study is to simplify engine and vehicle integration processes, decrease exhaust gas volume, decrease fuel consumption by optimizing cam timing and spark timing, and improve engine mechatronic functioning. The method followed in this work is applicable to small and medium-size gasoline and diesel engines. The results show that the developed model achieved good accuracy in predicting the future demand of fuel consumption for the engine control unit (ECU), yielding a Mean Square Error (MSE) of 1.12e-6 on unseen samples.
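
    A hedged sketch of the kind of ANN regression model described: engine operating points in, fuel consumption out, scored by MSE on held-out samples. The features, synthetic fuel-flow relation and network size are placeholders, not the authors' test-stand data or model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 2000
rpm = rng.uniform(800, 6000, n)
throttle = rng.uniform(0, 1, n)
spark_adv = rng.uniform(-10, 40, n)   # spark timing, degrees
# Synthetic fuel-flow relation standing in for the measured map.
fuel = 0.002 * rpm * throttle + 0.01 * np.abs(spark_adv - 20) \
       + 0.1 * rng.standard_normal(n)

X = np.column_stack([rpm, throttle, spark_adv])
Xtr, Xte, ytr, yte = train_test_split(X, fuel, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20, 10),
                                   max_iter=3000, random_state=0))
model.fit(Xtr, ytr)
print("test MSE:", np.mean((model.predict(Xte) - yte) ** 2))
```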

    Internet of Plants Application for Smart Agriculture

    Nowadays, the Internet of Things (IoT) is receiving great attention due to its potential strength and ability to be integrated into any complex system. The IoT provides data acquired from the environment to the Internet through service providers, which further helps users to view the numerical or plotted data. In addition, it allows objects located far away to be sensed and controlled remotely through embedded devices, which is important in the agriculture domain. Developing such a system for the IoT is a very complex task due to the diverse variety of devices, link layer technologies and services. This paper proposes a practical approach to acquiring temperature, humidity and soil moisture data for plants. To accomplish this, we developed a prototype device and an Android application which acquire physical data and send it to the cloud. Moreover, in the subsequent part of the current research work, we focus on a temperature forecasting application. Meteorological parameters have a profound influence on crop growth, development and yield, so forecasting them is valuable in agriculture. In response to this fact, an application is developed for forecasting maximum and minimum temperatures 10 days ahead using a type of recurrent neural network.
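
    A minimal sketch of the acquisition-to-cloud step described above: package temperature, humidity and soil-moisture readings as JSON and POST them to a cloud endpoint. The URL, field names and read_sensors() stub are hypothetical, not the paper's implementation.

```python
import time
import requests

CLOUD_URL = "https://example.com/api/readings"   # placeholder endpoint

def read_sensors():
    # Stub standing in for the real temperature/humidity/soil drivers.
    return {"temperature_c": 24.6, "humidity_pct": 61.0,
            "soil_moisture_pct": 38.5}

def publish_once(device_id="plant-node-01"):
    payload = {"device": device_id, "ts": time.time(), **read_sensors()}
    resp = requests.post(CLOUD_URL, json=payload, timeout=5)
    resp.raise_for_status()   # fail loudly if the cloud rejects the reading

if __name__ == "__main__":
    publish_once()
```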